
    The linear growth rate of structure in Parametrized Post-Friedmannian Universes

    A possible solution to the dark energy problem is that Einstein's theory of general relativity is modified. A suite of models has been proposed but, in general, they are unable to predict the correct amount of large scale structure in the distribution of galaxies or anisotropies in the Cosmic Microwave Background. It has been argued, however, that it should be possible to constrain a general class of theories of modified gravity by focusing on properties such as the growing mode, gravitational slip and the effective, time-varying Newton's constant. We show that, assuming certain physical requirements such as stability, metricity and gauge invariance, it is possible to derive consistency conditions between these various parameters. In this paper we focus on theories which have, at most, second derivatives in the metric variables and find restrictions that shed light on current and future experimental constraints without having to resort to an (as yet unknown) complete theory of modified gravity. We claim that future measurements of the growth of structure on small scales (i.e. 1-200 h^{-1} Mpc) may lead to tight constraints on both dark energy and modified theories of gravity.
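    For orientation, one common way such parametrized frameworks are written in the literature (the functions \mu and \eta below are a conventional illustrative choice, not necessarily the paper's own notation, and sign conventions for the potentials vary) is via a modified Poisson equation plus a slip relation between the metric potentials \Phi and \Psi:

    \[
      -k^{2}\Psi = 4\pi G\,a^{2}\,\mu(a,k)\,\bar{\rho}\,\Delta ,
      \qquad
      \eta(a,k) \equiv \frac{\Phi}{\Psi} ,
      \qquad
      f \equiv \frac{d\ln\Delta}{d\ln a} \simeq \Omega_{m}(a)^{0.55}\ \text{(in GR)} .
    \]

    General relativity is recovered for \mu = \eta = 1, so measurements of the growth rate f on the quoted scales translate directly into constraints on these free functions.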

    Parametrized post-Friedmannian framework for interacting dark energy theories

    We present the most general parametrization of models of dark energy in the form of a scalar field which is explicitly coupled to dark matter. We follow and extend the parametrized post-Friedmannian approach, previously applied to modified gravity theories, in order to include interacting dark energy. We demonstrate its use through a number of worked examples and show how the initially large parameter space of free functions can be significantly reduced and constrained to include only a few nonzero coefficients. This paves the way for a model-independent approach to classify and test interacting dark energy theories.
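    As a deliberately simple illustration of the kind of coupling being parametrized, assuming a standard coupled-quintessence form with a constant coupling \beta (in units where M_{Pl} = 1) rather than the paper's fully general expansion:

    \[
      \nabla_{\mu} T^{\mu\nu}_{(c)} = +\,Q^{\nu} ,
      \qquad
      \nabla_{\mu} T^{\mu\nu}_{(\phi)} = -\,Q^{\nu} ,
      \qquad
      Q^{\nu} = \beta\,\rho_{c}\,\nabla^{\nu}\phi ,
    \]

    so energy-momentum is exchanged between cold dark matter (c) and the scalar field \phi while the total is conserved; a general parametrization promotes \beta to free functions, which the worked examples then reduce to a few nonzero coefficients.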

    The initial conditions of the universe: how much isocurvature is allowed?

    We investigate the constraints imposed by the current data on correlated mixtures of adiabatic and non-adiabatic primordial perturbations. We discover subtle flat directions in parameter space that tolerate large (~60%) contributions of non-adiabatic fluctuations. In particular, larger values of the baryon density and a spectral tilt are allowed. The cancellations in the degenerate directions are explored and the role of priors elucidated.

    Constraints on isocurvature models from the WMAP first-year data

    We investigate the constraints imposed by the first-year WMAP CMB data, extended to higher multipoles by data from ACBAR, BOOMERANG, CBI and the VSA and complemented by the LSS data from the 2dF galaxy redshift survey, on the possible amplitude of primordial isocurvature modes. A flat universe with CDM and Lambda is assumed, and the baryon, CDM (CI), and neutrino density (NID) and velocity (NIV) isocurvature modes are considered. Constraints on the allowed isocurvature contributions are established from the data for various combinations of the adiabatic mode and one, two, and three isocurvature modes, with intermode cross-correlations allowed. Since baryon and CDM isocurvature are observationally virtually indistinguishable, these modes are not considered separately. We find that when just a single isocurvature mode is added, the present data allow an isocurvature fraction as large as 13 ± 6, 7 ± 4, and 13 ± 7 per cent for adiabatic plus the CI, NID, and NIV modes, respectively. When two isocurvature modes plus the adiabatic mode and cross-correlations are allowed, these percentages rise to 47 ± 16, 34 ± 12, and 44 ± 12 for the combinations CI+NID, CI+NIV, and NID+NIV, respectively. Finally, when all three isocurvature modes and cross-correlations are allowed, the admissible isocurvature fraction rises to 57 ± 9 per cent. The sensitivity of the results to the choice of prior probability distribution is examined.
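    For reference, a minimal sketch of how such isocurvature fractions are commonly quoted (conventions differ between papers; the fraction \alpha and correlation amplitude \beta below are illustrative, not the paper's exact definition):

    \[
      S_{c} \equiv \delta_{c} - \tfrac{3}{4}\,\delta_{\gamma} ,
      \qquad
      C_{\ell} = (1-\alpha)\,C_{\ell}^{\mathrm{ad}}
               + \alpha\,C_{\ell}^{\mathrm{iso}}
               + 2\beta\sqrt{\alpha(1-\alpha)}\,C_{\ell}^{\mathrm{corr}} ,
    \]

    where S_c is the CDM entropy perturbation relative to photons. With several isocurvature modes the single \alpha generalizes to a matrix of auto- and cross-amplitudes, which is why the admissible fraction grows as more modes and correlations are opened up.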

    Probing the Reionization History of the Universe using the Cosmic Microwave Background Polarization

    The recent discovery of a Gunn-Peterson (GP) trough in the spectrum of the redshift 6.28 SDSS quasar has raised the tantalizing possibility that we have detected the reionization of the universe. However, a neutral fraction (of hydrogen) as small as 0.1% is sufficient to cause the GP trough, hence its detection alone cannot rule out reionization at a much earlier epoch. The Cosmic Microwave Background (CMB) polarization anisotropy offers an alternative way to explore the dark age of the universe. We show that for most models constrained by the current CMB data and by the discovery of a GP trough (showing that reionization occurred at z > 6.3), MAP can detect the reionization signature in the polarization power spectrum. The expected 1-sigma error on the measurement of the electron optical depth is around 0.03, with a weak dependence on the value of that optical depth. Such a constraint on the optical depth will allow MAP to achieve a 1-sigma error of 6% on the amplitude of the primordial power spectrum. MAP with two years (Planck with one year) of observation can distinguish a model with 50% (6%) partial ionization between redshifts of 6.3 and 20 from a model in which hydrogen was completely neutral at redshifts greater than 6.3. Planck will be able to distinguish between different reionization histories even when they imply the same optical depth to electron scattering for the CMB photons.
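    The optical depth numbers quoted above can be checked with a back-of-envelope integration of the Thomson optical depth, tau = c sigma_T \int n_e dz / [(1+z) H(z)]. The sketch below assumes instantaneous, complete hydrogen reionization at z_re in a flat LambdaCDM background with illustrative parameter values, and ignores helium; it is not the paper's calculation.

    # Back-of-envelope Thomson optical depth for instantaneous reionization
    # at redshift z_re, in a flat LambdaCDM background. Parameter values are
    # illustrative; helium reionization is ignored.
    import numpy as np
    from scipy.integrate import quad

    sigma_T = 6.6524e-29   # Thomson cross-section [m^2]
    c       = 2.9979e8     # speed of light [m/s]
    m_p     = 1.6726e-27   # proton mass [kg]
    G       = 6.6743e-11   # Newton's constant [m^3 kg^-1 s^-2]

    h, Omega_m, Omega_b, X_H = 0.7, 0.3, 0.045, 0.76
    H0      = h * 3.2408e-18                    # Hubble constant [1/s]
    Omega_L = 1.0 - Omega_m

    rho_crit = 3.0 * H0**2 / (8.0 * np.pi * G)  # critical density [kg/m^3]
    n_H0     = X_H * Omega_b * rho_crit / m_p   # H number density today [1/m^3]

    def H(z):
        return H0 * np.sqrt(Omega_m * (1.0 + z)**3 + Omega_L)

    def tau(z_re, x_e=1.0):
        # n_e(z) = x_e * n_H0 * (1+z)^3 below z_re, zero above
        integrand = lambda z: x_e * n_H0 * (1.0 + z)**3 / ((1.0 + z) * H(z))
        return c * sigma_T * quad(integrand, 0.0, z_re)[0]

    print(f"tau(z_re = 6.3): {tau(6.3):.3f}")   # roughly 0.03-0.04
    print(f"tau(z_re = 20) : {tau(20.0):.3f}")  # several times larger

    Reionization at z_re = 6.3 gives an optical depth near the quoted 0.03 measurement floor, while earlier reionization histories push tau several times higher, which is what makes the polarization signal discriminating.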

    Ambiguous Tests of General Relativity on Cosmological Scales

    There are a number of approaches to testing General Relativity (GR) on linear scales using parameterized frameworks for modifying cosmological perturbation theory. It is sometimes assumed that the details of any given parameterization are unimportant if one uses it as a diagnostic for deviations from GR. In this brief report we argue that this is not necessarily so. First, we show that adopting alternative combinations of modifications to the field equations significantly changes the constraints that one obtains. In addition, we show that using a parameterization with insufficient freedom significantly tightens the apparent theoretical constraints. Fundamentally, we argue that it is almost never appropriate to consider modifications to the perturbed Einstein equations as being constraints on the effective gravitational constant, for example, in the same sense that solar system constraints are. The only consistent modifications are either those that grant near-total freedom, as in decomposition methods, or ones which map directly to a particular part of theory space.
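    To make the point about alternative combinations concrete, a sketch under common conventions (the symbols \mu, \eta, \Sigma are assumptions of this illustration, not the paper's notation): modifying the Poisson equation for \Psi with \mu and the slip with \eta = \Phi/\Psi is equivalent, for lensing observables, to a single function \Sigma acting on the Weyl potential,

    \[
      -k^{2}(\Phi+\Psi) = 8\pi G\,a^{2}\,\Sigma(a,k)\,\bar{\rho}\,\Delta ,
      \qquad
      \Sigma = \tfrac{1}{2}\,\mu\,(1+\eta) ,
    \]

    so a prior that is flat in (\mu, \eta) is not flat in (\mu, \Sigma): the same data can return visibly different "constraints on GR" depending on which pair is sampled, which is precisely the ambiguity at issue.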

    The Tensor-Vector-Scalar theory and its cosmology

    Over the last few decades, astronomers and cosmologists have accumulated vast amounts of data clearly demonstrating that our current theories of fundamental particles and of gravity are inadequate to explain the observed discrepancy between the dynamics and the distribution of the visible matter in the Universe. The Modified Newtonian Dynamics (MOND) proposal aims at solving the problem by postulating that Newton's second law of motion is modified for accelerations smaller than ~10^{-10} m/s^2. This simple amendment has had tremendous success in explaining galactic rotation curves. However, being non-relativistic, it cannot make firm predictions for cosmology. A relativistic theory called Tensor-Vector-Scalar (TeVeS), which has a MOND limit for non-relativistic systems, has been proposed by Bekenstein, building on earlier work by Sanders. In this article I give a short introduction to TeVeS theory and focus on its predictions for cosmology as well as some non-cosmological studies.
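    For readers unfamiliar with the non-relativistic proposal that TeVeS makes relativistic, the MOND prescription can be summarized as (standard notation, with a_0 ~ 10^{-10} m/s^2 the acceleration scale mentioned above and a_N the Newtonian acceleration):

    \[
      \mu\!\left(\frac{|\vec{a}|}{a_{0}}\right)\vec{a} = \vec{a}_{N} ,
      \qquad
      \mu(x) \to 1 \ (x \gg 1) ,
      \qquad
      \mu(x) \to x \ (x \ll 1) ,
    \]

    so in the deep-MOND regime a = \sqrt{a_N a_0}; for a circular orbit around a mass M this gives v^4 = G M a_0, i.e. asymptotically flat rotation curves, which is the success referred to above.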

    Mechanical robustness of HL-LHC collimator designs

    Two new absorbing materials were developed as collimator inserts to fulfil the requirements of the HL-LHC higher-brightness beams: molybdenum-carbide graphite (MoGr) and copper-diamond (CuCD). These materials were tested under intense beam impacts at the CERN HiRadMat facility in 2015, when full jaw prototypes were irradiated. Additional tests in HiRadMat were performed in 2017 on another series of material samples, including improved grades of MoGr and CuCD and different coating solutions. This paper summarizes the main results of the two experiments, focusing on the behaviour of the novel composite blocks, the metallic housing, and the cooling circuit. The experimental campaign confirmed the choice of materials and design solutions for the HL-LHC collimators and provided a unique opportunity to benchmark numerical models. In particular, the tests validated the selection of MoGr for primary and secondary collimators, and of CuCD as a valid solution for robust tertiary collimators.

    Testing Beam-Induced Quench Levels of LHC Superconducting Magnets

    In the years 2009-2013 the Large Hadron Collider (LHC) was operated with top beam energies of 3.5 TeV and, from 2012, 4 TeV per proton instead of the nominal 7 TeV, and the currents in the superconducting magnets were reduced accordingly. To date only seventeen beam-induced quenches have occurred; eight of them during specially designed quench tests, the others during injection. There has not been a single beam-induced quench during normal collider operation with stored beam. The conditions, however, are expected to become much more challenging after the long LHC shutdown. The magnets will be operating at near-nominal currents, in the presence of high-energy, high-intensity beams with a stored energy of up to 362 MJ per beam. In this paper we summarize our efforts to understand the quench levels of LHC superconducting magnets. We describe beam-loss events and dedicated experiments with beam, as well as the simulation methods used to reproduce the observable signals. The simulated energy deposition in the coils is compared to the quench levels predicted by electro-thermal models, thus allowing us to validate and improve the models which are used to set beam-dump thresholds on beam-loss monitors for Run 2.
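    The quoted 362 MJ per beam follows directly from the nominal machine parameters; a quick sanity check (the bunch count and intensity below are the nominal design values, assumed here for illustration):

    # Sanity check of the quoted stored beam energy using nominal LHC
    # design parameters (illustrative, not taken from the paper).
    EV_TO_J           = 1.602e-19  # electron-volt in joules
    E_PROTON_EV       = 7.0e12     # nominal proton energy: 7 TeV
    N_BUNCHES         = 2808       # nominal bunches per beam
    PROTONS_PER_BUNCH = 1.15e11    # nominal bunch intensity

    e_stored_J = N_BUNCHES * PROTONS_PER_BUNCH * E_PROTON_EV * EV_TO_J
    print(f"stored energy per beam: {e_stored_J / 1e6:.0f} MJ")  # ~362 MJ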